Figure 1 shows how accuracy changes under different cutoff values. However, for gender classification on the CelebA dataset, the trade-off between $\lambda_{val}$ and accuracy is not very clear; we suspect that in this scenario, focusing on hard samples does not harm the performance on easy samples, and thus benefits the classifier. Figure 1 also shows how fairness (equalized odds) changes under different cutoff values.

Suppose we have a large unlabeled training set of size $N$ and a small labeled validation set $\{(x^{val}_j, y^{val}_j),\ 1 \le j \le M\}$ with $M \ll N$. In each training step, we sample a small mini-batch of size $n$ ($n < N$) from the training set and perform random augmentation twice to obtain a subset $\{x_i,\ 1 \le i \le 2n\}$, which we use to update the contrastive encoder $f$ with parameters $\theta$. During validation, we freeze the contrastive encoder and train a downstream linear classifier $g$ with parameters $\omega$ on the classification task.
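The training step above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the encoder, the noise-based augmentation, and the NT-Xent contrastive loss are all stand-in assumptions chosen to make the sampling-and-two-view structure concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    # Hypothetical augmentation: small Gaussian noise, a stand-in for
    # the random augmentations described in the text.
    return x + 0.1 * rng.normal(size=x.shape)

def encode(x, theta):
    # Toy contrastive encoder f with parameters theta: one linear map
    # followed by L2 normalization of the embeddings.
    z = x @ theta
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def nt_xent_loss(z, temperature=0.5):
    # NT-Xent loss over 2n embeddings, where rows (i, i+n) are the two
    # augmented views of the same training sample.
    two_n = z.shape[0]
    n = two_n // 2
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = np.concatenate([np.arange(n, two_n), np.arange(0, n)])
    return -log_prob[np.arange(two_n), pos].mean()

# One training step: draw a mini-batch of size n from the unlabeled set,
# augment it twice to get the 2n views {x_i, 1 <= i <= 2n}, and compute
# the contrastive loss used to update theta.
N, d, n, k = 1000, 8, 16, 4
train_x = rng.normal(size=(N, d))       # large unlabeled training set
theta = rng.normal(size=(d, k))         # encoder parameters

batch = train_x[rng.choice(N, size=n, replace=False)]
views = np.vstack([augment(batch), augment(batch)])
loss = nt_xent_loss(encode(views, theta))
```

During validation one would freeze `theta` and fit a linear classifier $g_\omega$ on embeddings of the small labeled set; that step is omitted here for brevity.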